health information


Google scraps AI search feature that crowdsourced amateur medical advice

The Guardian

Google had said the 'What People Suggest' feature aimed to provide users with information from people with similar lived experiences. Google has dropped a new artificial intelligence search feature that gave users crowdsourced health advice from amateurs around the world. The company had said its launch of "What People Suggest", which provided tips from strangers, showed "the potential of AI to transform health outcomes across the globe". But Google has since quietly removed the feature, according to three people familiar with the decision.


Mind launches inquiry into AI and mental health after Guardian investigation

The Guardian

The Guardian revealed how people were being put at risk of harm by false and misleading health information in Google AI Overviews. Exclusive: England and Wales charity to examine safeguards after the Guardian exposed 'very dangerous' advice on Google AI Overviews. Mind is launching a significant inquiry into artificial intelligence and mental health after a Guardian investigation exposed how Google's AI Overviews gave people "very dangerous" medical advice. In a year-long commission, the mental health charity, which operates in England and Wales, will examine the risks and safeguards required as AI increasingly influences the lives of millions of people affected by mental health issues worldwide. The inquiry - the first of its kind globally - will bring together the world's leading doctors and mental health professionals, as well as people with lived experience, health providers, policymakers and tech companies.


Google puts users at risk by downplaying health disclaimers under AI Overviews

The Guardian

Google's AI Overviews only issue a warning if users choose to request additional health information by selecting 'Show more'. Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong. When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help rather than relying solely on its summaries. "AI Overviews will inform people when it's important to seek out expert advice or to verify the information presented," Google has said.


How the 'confident authority' of Google AI Overviews is putting public health at risk

The Guardian

Experts say the tool can give 'completely wrong' medical advice which could put users at risk of serious harm. Do I have the flu or Covid? Why do I wake up feeling tired? What is causing the pain in my chest? For more than two decades, typing medical questions into the world's most popular search engine has served up a list of links to websites with the answers. Google those health queries today and the response will likely be written by artificial intelligence.


'Dangerous and alarming': Google removes some of its AI summaries after users' health put at risk

The Guardian

Google has said AI Overviews, which use generative AI to provide snapshots of information on a topic or question, are 'helpful and reliable'. Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were being put at risk of harm by false and misleading information. The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are "helpful" and "reliable". But some of the summaries, which appear at the top of search results, served up inaccurate health information, putting users at risk of harm.


Learning to Call: A Field Trial of a Collaborative Bandit Algorithm for Improved Message Delivery in Mobile Maternal Health

Dasgupta, Arpan, Maniyar, Mizhaan, Srivastava, Awadhesh, Kumar, Sanat, Mahale, Amrita, Hegde, Aparna, Suggala, Arun, Shanmugam, Karthikeyan, Taneja, Aparna, Tambe, Milind

arXiv.org Artificial Intelligence

Mobile health (mHealth) programs use automated voice messages to deliver health information, particularly to underserved communities, and have proven effective at improving health outcomes through increased awareness and behavioral change. India's Kilkari program delivers vital maternal health information via weekly voice calls to millions of mothers. However, the current random call scheduling often results in missed calls and reduced message delivery. This study presents a field trial of a collaborative bandit algorithm designed to optimize call timing by learning individual mothers' preferred call times. We deployed the algorithm with around 6,500 Kilkari participants as a pilot study, comparing its performance to the baseline random calling approach. Our results demonstrate a statistically significant improvement in call pick-up rates with the bandit algorithm, indicating its potential to enhance message delivery and impact millions of mothers across India. This research highlights the efficacy of personalized scheduling in mobile health interventions and underscores the potential of machine learning to improve maternal health outreach at scale.
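The abstract does not include code, but the core idea - treating each candidate call-time slot as a bandit arm and rewarding answered calls - can be sketched as follows. This is a minimal illustration assuming a discrete slot grid and a shared population prior as a crude stand-in for the paper's collaborative structure; the slot names, prior weight, and UCB1 rule are assumptions, not the authors' actual method.

```python
import math

# Hypothetical grid of call-time slots; the real system's slots may differ.
SLOTS = ["morning", "midday", "afternoon", "evening"]

class CallTimeBandit:
    def __init__(self, prior_rates, prior_weight=5.0):
        # prior_rates: assumed population pick-up rate per slot, in [0, 1],
        # so a new beneficiary borrows strength from the crowd.
        self.counts = {s: prior_weight for s in SLOTS}
        self.rewards = {s: prior_rates[s] * prior_weight for s in SLOTS}
        self.t = 0

    def choose_slot(self):
        # Pick the slot with the highest upper confidence bound (UCB1).
        self.t += 1
        def ucb(s):
            mean = self.rewards[s] / self.counts[s]
            return mean + math.sqrt(2.0 * math.log(self.t) / self.counts[s])
        return max(SLOTS, key=ucb)

    def update(self, slot, picked_up):
        # Reward is 1 if the call was answered, 0 otherwise.
        self.counts[slot] += 1
        self.rewards[slot] += 1.0 if picked_up else 0.0
```

Each week the scheduler would call choose_slot(), place the call at that time, and feed the pick-up outcome back via update(), gradually concentrating calls on each mother's preferred time.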


Speaking at the Right Level: Literacy-Controlled Counterspeech Generation with RAG-RL

Song, Xiaoying, Anik, Anirban Saha, Barua, Dibakar, Luo, Pengcheng, Ding, Junhua, Hong, Lingzi

arXiv.org Artificial Intelligence

Health misinformation spreading online poses a significant threat to public health. Researchers have explored methods for automatically generating counterspeech to health misinformation as a mitigation strategy. Existing approaches often produce uniform responses, ignoring that the health literacy level of the audience could affect the accessibility and effectiveness of counterspeech. We propose a Controlled-Literacy framework using retrieval-augmented generation (RAG) with reinforcement learning (RL) to generate tailored counterspeech adapted to different health literacy levels. In particular, we retrieve knowledge aligned with specific health literacy levels, enabling accessible and factual information to support generation. We design a reward function incorporating subjective user preferences and objective readability-based rewards to optimize counterspeech to the target health literacy level. Experimental results show that Controlled-Literacy outperforms baselines by generating more accessible and user-preferred counterspeech. This research contributes to more equitable and impactful public health communication by improving the accessibility and comprehension of counterspeech to health misinformation.
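As a rough illustration of the reward design described above, one could combine a readability term (distance of a Flesch Reading Ease score from a target literacy level) with a subjective preference term. The weights, the textstat dependency, and the function names below are assumptions for the sketch, not the paper's actual reward.

```python
import textstat  # third-party readability library; an assumed dependency

def readability_reward(text: str, target_fre: float, scale: float = 50.0) -> float:
    # Flesch Reading Ease: higher scores read more easily. The reward peaks
    # when the generated text's score matches the target literacy level.
    fre = textstat.flesch_reading_ease(text)
    return max(0.0, 1.0 - abs(fre - target_fre) / scale)

def combined_reward(text: str, target_fre: float, preference: float,
                    alpha: float = 0.5) -> float:
    # preference in [0, 1] stands in for a learned user-preference score;
    # alpha balances the objective and subjective terms (illustrative value).
    return alpha * readability_reward(text, target_fre) + (1.0 - alpha) * preference
```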


Towards Better Health Conversations: The Benefits of Context-seeking

Sayres, Rory, Hao, Yuexing, Ward, Abbi, Wang, Amy, Freeman, Beverly, Zhan, Serena, Ardila, Diego, Li, Jimmy, Lee, I-Ching, Iurchenko, Anna, Kou, Siyi, Badola, Kartikeya, Hu, Jimmy, Kumar, Bhawesh, Johnson, Keith, Vijay, Supriya, Krogue, Justin, Hassidim, Avinatan, Matias, Yossi, Webster, Dale R., Virmani, Sunny, Liu, Yun, Duong, Quang, Schaekermann, Mike

arXiv.org Artificial Intelligence

Navigating health questions can be daunting in the modern information landscape. Large language models (LLMs) may provide tailored, accessible information, but also risk being inaccurate, biased or misleading. We present insights from 4 mixed-methods studies (total N=163), examining how people interact with LLMs for their own health questions. Qualitative studies revealed the importance of context-seeking in conversational AIs to elicit specific details a person may not volunteer or know to share. Context-seeking by LLMs was valued by participants, even if it meant deferring an answer for several turns. Incorporating these insights, we developed a "Wayfinding AI" to proactively solicit context. In a randomized, blinded study, participants rated the Wayfinding AI as more helpful, relevant, and tailored to their concerns compared to a baseline AI. These results demonstrate the strong impact of proactive context-seeking on conversational dynamics, and suggest design patterns for conversational AI to help navigate health topics.
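A minimal sketch of the context-seeking pattern the studies describe might look like the following, where the system prompt instructs the model to ask one clarifying question when key details are missing. The prompt wording and the llm callable are illustrative stand-ins, not the Wayfinding AI's actual implementation.

```python
# Illustrative system prompt for proactive context-seeking; not the paper's.
SYSTEM_PROMPT = (
    "You help people navigate health questions. Before answering, identify "
    "context the person may not have volunteered (e.g., symptom duration, "
    "age, medications). If key details are missing, ask ONE focused "
    "clarifying question instead of answering. Once you have enough context, "
    "give a tailored answer and recommend professional care where appropriate."
)

def respond(llm, history, user_message):
    # history: prior turns as {"role": ..., "content": ...} dicts;
    # llm: any chat-completion callable taking a message list.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history + [{"role": "user", "content": user_message}]
    return llm(messages)  # may return a clarifying question for several turns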


"What's Up, Doc?": Analyzing How Users Seek Health Information in Large-Scale Conversational AI Datasets

Paruchuri, Akshay, Aziz, Maryam, Vartak, Rohit, Ali, Ayman, Uchehara, Best, Liu, Xin, Chatterjee, Ishan, Agrawal, Monica

arXiv.org Artificial Intelligence

People are increasingly seeking healthcare information from large language models (LLMs) via interactive chatbots, yet the nature and inherent risks of these conversations remain largely unexplored. In this paper, we filter large-scale conversational AI datasets to achieve HealthChat-11K, a curated dataset of 11K real-world conversations composed of 25K user messages. We use HealthChat-11K and a clinician-driven taxonomy for how users interact with LLMs when seeking healthcare information in order to systematically study user interactions across 21 distinct health specialties. Our analysis reveals insights into the nature of how and why users seek health information, such as common interactions, instances of incomplete context, affective behaviors, and interactions (e.g., leading questions) that can induce sycophancy, underscoring the need for improvements in the healthcare support capabilities of LLMs deployed as conversational AI. Code and artifacts to retrieve our analyses and combine them into a curated dataset can be found here: https://github.com/yahskapar/HealthChat
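To make the curation step concrete, a filtering pipeline of the kind described might look like the sketch below: keep conversations containing health-seeking user messages, then tag each with a specialty from the taxonomy. The classifier hooks and specialty list are hypothetical placeholders; the code at the linked repository is the authoritative version.

```python
# Hypothetical subset of the paper's 21 health specialties.
SPECIALTIES = ["cardiology", "dermatology", "psychiatry"]

def filter_and_label(conversations, is_health_seeking, classify_specialty):
    # conversations: lists of {"role": ..., "content": ...} message dicts;
    # is_health_seeking / classify_specialty: placeholder classifier hooks.
    curated = []
    for conv in conversations:
        user_msgs = [m["content"] for m in conv if m["role"] == "user"]
        if any(is_health_seeking(msg) for msg in user_msgs):
            curated.append({
                "messages": conv,
                "specialty": classify_specialty(" ".join(user_msgs)),
            })
    return curated
```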


Beyond Listenership: AI-Predicted Interventions Drive Improvements in Maternal Health Behaviours

Dasgupta, Arpan, Gharat, Sarvesh, Madhiwalla, Neha, Hegde, Aparna, Tambe, Milind, Taneja, Aparna

arXiv.org Artificial Intelligence

Automated voice calls with health information are a proven method for disseminating maternal and child health information among beneficiaries and are deployed in several programs around the world. However, these programs often suffer from beneficiary dropoffs and poor engagement. In previous work, through real-world trials, we showed that an AI model, specifically a restless bandit model, could identify beneficiaries who would benefit most from live service call interventions, preventing dropoffs and boosting engagement. However, one key question has remained open so far: does such improved listenership via AI-targeted interventions translate into beneficiaries' improved knowledge and health behaviors? We present a first study that shows not only listenership improvements due to AI interventions, but also simultaneously links these improvements to health behavior changes. Specifically, we demonstrate that AI-scheduled interventions, which enhance listenership, lead to statistically significant improvements in beneficiaries' health behaviors such as taking iron or calcium supplements in the postnatal period, as well as understanding of critical health topics during pregnancy and infancy. This underscores the potential of AI to drive meaningful improvements in maternal and child health.
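The restless bandit planning step described above can be sketched as scoring each beneficiary with a Whittle index and intervening on the top-k under a service-call budget. The index function here is an opaque placeholder for the authors' trained model; only the budgeted top-k selection is shown.

```python
import heapq

def plan_interventions(beneficiaries, whittle_index, budget_k):
    # whittle_index(b) -> float: estimated benefit of a live service call
    # to beneficiary b now. In the deployed system this comes from the
    # trained restless bandit model; here it is a stand-in.
    scored = [(whittle_index(b), b) for b in beneficiaries]
    return [b for _, b in heapq.nlargest(budget_k, scored, key=lambda x: x[0])]
```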